

proposes a new approach (R1), and the idea of the error-correction mechanism is intuitive (R1), novel (R2), and smart

Neural Information Processing Systems

Is any special feature operation applied in ETN? Does a larger K help? What is the motivation for computing affinity matrices? How is the error diffusion achieved? Please see Figure 1 in the submission for an example. On performance issues, including the increased training burden and running time. Thanks for pointing out the mistake in real-time stylization, which will be corrected in the revision.


A Tree Encoding

Neural Information Processing Systems

As mentioned in Sec. 3, a Tree-LSTM [...] maintains a hidden state and a cell state analogously to a standard LSTM. As for data splits, we split each dataset into a train set and a test set for all tasks, following previous works. More details about the train and test sizes can be found in Tab. 4.
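To make the hidden/cell-state analogy concrete, the following is a minimal sketch of a child-sum Tree-LSTM node update in NumPy. The function name `child_sum_treelstm_node` and the parameter-dict layout are illustrative assumptions, not the submission's actual code; the update equations follow the standard child-sum Tree-LSTM formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def child_sum_treelstm_node(x, child_h, child_c, W, U, b):
    """One child-sum Tree-LSTM node update (illustrative sketch).

    x        : (d_in,) input at this node
    child_h  : (k, d)  hidden states of the k children (k may be 0)
    child_c  : (k, d)  cell states of the k children
    W, U, b  : parameter dicts keyed by gate name 'i', 'f', 'o', 'u'
    Returns (h, c), the node's hidden state and cell state.
    """
    # sum of children's hidden states (zero vector at a leaf)
    h_tilde = child_h.sum(axis=0) if len(child_h) else np.zeros_like(b['i'])
    i = sigmoid(W['i'] @ x + U['i'] @ h_tilde + b['i'])   # input gate
    o = sigmoid(W['o'] @ x + U['o'] @ h_tilde + b['o'])   # output gate
    u = np.tanh(W['u'] @ x + U['u'] @ h_tilde + b['u'])   # candidate update
    # one forget gate per child, applied to that child's cell state
    c = i * u
    for h_k, c_k in zip(child_h, child_c):
        f_k = sigmoid(W['f'] @ x + U['f'] @ h_k + b['f'])
        c = c + f_k * c_k
    h = o * np.tanh(c)
    return h, c
```

Like an LSTM cell, each node keeps a gated cell state `c` and an exposed hidden state `h`; the difference is that the recurrence aggregates over an arbitrary number of children instead of a single predecessor.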


Conservation-preserved Fourier Neural Operator through Adaptive Correction

Liu, Chaoyu, Li, Yangming, Deng, Zhongying, Budd, Chris, Schönlieb, Carola-Bibiane

arXiv.org Artificial Intelligence

Fourier Neural Operators (FNOs) have recently emerged as a promising and efficient approach for learning numerical solutions to partial differential equations (PDEs) from data. However, a standard FNO often fails to preserve key conservation laws, such as mass, momentum, or norm conservation, which are crucial for accurately modeling physical systems. Existing methods incorporate these conservation laws into Fourier neural operators by designing related loss functions or by applying post-processing at training time. None of them can both exactly and adaptively correct the outputs to satisfy conservation laws, and our experiments show that these methods can lead to inferior performance while preserving conservation laws. In this work, we propose a novel adaptive correction approach to ensure the conservation of fundamental quantities. Our method introduces a learnable matrix that adaptively adjusts the solution to satisfy the conservation law during training. It ensures that the outputs exactly satisfy the target conservation law while allowing the model more flexibility and adaptivity in correcting its outputs. We theoretically show that applying our adaptive correction to an unconstrained FNO yields a solution with data loss no worse than that of the best conservation-satisfying FNO. We compare our approach with existing methods on a range of representative PDEs. Experimental results show that our method consistently outperforms the others.
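To illustrate the core idea of an exact yet learnable conservation correction, here is a minimal sketch for the simplest case, total-mass conservation. The function name `conserve_mass` and the use of softmax weights over `logits` are assumptions for this sketch; the paper's method uses a learnable matrix, which generalizes this redistribution scheme. The key property shown is that the corrected output satisfies the constraint exactly for any value of the learnable parameters.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def conserve_mass(u_pred, mass_target, logits):
    """Exact mass-conservation correction (illustrative sketch).

    u_pred      : (n,) raw network prediction on a discretized field
    mass_target : scalar, the conserved total mass from the input
    logits      : (n,) learnable parameters controlling where the
                  mass deficit is redistributed

    The redistribution weights w = softmax(logits) sum to 1, so the
    corrected field has exactly the target total mass no matter what
    the (trained) logits are; training only shapes *where* the
    correction is applied, never whether the constraint holds.
    """
    w = softmax(logits)                   # non-negative, sums to 1
    deficit = mass_target - u_pred.sum()  # mass missing from the raw output
    return u_pred + w * deficit           # new total == mass_target exactly
```

Because the constraint holds by construction, the correction can be trained end-to-end with an ordinary data loss, rather than penalizing conservation violations in the loss function.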